Greek Language


Greek2MathTex: A Greek Speech-to-Text Framework for LaTeX Equations Generation

Gkritzali, Evangelia, Kaliosis, Panagiotis, Galanaki, Sofia, Palogiannidi, Elisavet, Giannakopoulos, Theodoros

arXiv.org Artificial Intelligence

In the vast majority of academic and scientific domains, LaTeX has established itself as the de facto standard for typesetting complex mathematical equations and formulae. However, LaTeX's complex syntax and code-like appearance present accessibility barriers for individuals with disabilities, as well as for those unfamiliar with coding conventions. In this paper, we address this challenge through the development of a novel speech-to-LaTeX equations system specifically designed for the Greek language. We propose an end-to-end system that harnesses the power of Automatic Speech Recognition (ASR) and Natural Language Processing (NLP) techniques to enable users to verbally dictate mathematical expressions and equations in natural language, which are subsequently converted into LaTeX format. We present the architecture and design principles of our system, highlighting key components such as the ASR engine, the LLM-based prompt-driven equations generation mechanism, and a custom evaluation metric employed throughout the development process. We have made our system open source and available at https://github.com/magcil/greek-speech-to-math.
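The transcript-to-LaTeX conversion step described above can be pictured with a deliberately minimal sketch. This is not the paper's method (which prompts an LLM with the ASR output); it is a toy lookup table of hypothetical spoken Greek math phrases, shown only to illustrate the shape of the conversion stage:

```python
# Hypothetical phrase-to-LaTeX table; the real system uses an
# LLM-based, prompt-driven generation step instead of a lookup.
SPOKEN_TO_LATEX = {
    "άλφα": r"\alpha",   # "alpha"
    "βήτα": r"\beta",    # "beta"
    "γάμμα": r"\gamma",  # "gamma"
    "συν": "+",          # "plus"
    "πλην": "-",         # "minus"
    "ίσον": "=",         # "equals"
}

def naive_transcript_to_latex(transcript: str) -> str:
    """Greedily replace known spoken-math phrases with LaTeX tokens.

    Longer phrases are replaced first so that multi-word terms would
    win over their substrings if the table contained any.
    """
    out = transcript
    for phrase in sorted(SPOKEN_TO_LATEX, key=len, reverse=True):
        out = out.replace(phrase, SPOKEN_TO_LATEX[phrase])
    return out
```

For example, the dictated phrase "άλφα συν βήτα ίσον γάμμα" ("alpha plus beta equals gamma") would come out as `\alpha + \beta = \gamma`; an LLM-based generator handles the far richer structures (fractions, roots, subscripts) that a fixed table cannot.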


NLP for The Greek Language: A Longer Survey

Papantoniou, Katerina, Tzitzikas, Yannis

arXiv.org Artificial Intelligence

There is a wide variety of methods, tools and resources for processing text in the English language. However, this is not the case for the Greek language, even though it has a long documented history spanning at least 3,400 years of written records (including texts in syllabic script) and 28 centuries (Archaic period to modern) of written text in alphabetic script [1, 2]. Greek's literary tradition of over 2,500 years is also notable. To aid those interested in using, developing or advancing techniques for Greek processing, in this paper we survey related works and resources, organized into categories. We hope this collection and categorization of works will be useful for students and researchers interested in NLP tasks, Information Retrieval and Knowledge Management for the Greek language.


Meltemi: The first open Large Language Model for Greek

Voukoutis, Leon, Roussis, Dimitris, Paraskevopoulos, Georgios, Sofianopoulos, Sokratis, Prokopidis, Prokopis, Papavasileiou, Vassilis, Katsamanis, Athanasios, Piperidis, Stelios, Katsouros, Vassilis

arXiv.org Artificial Intelligence

We describe the development and capabilities of Meltemi 7B, the first open Large Language Model for the Greek language. Meltemi 7B has 7 billion parameters and is trained on a 40 billion token Greek corpus. For the development of Meltemi 7B, we adapt Mistral by continual pretraining on the Greek corpus. Meltemi 7B contains up-to-date information up to September 2023. Furthermore, we have translated and curated a Greek instruction corpus, which has been used for the instruction tuning of a chat model, named Meltemi 7B Instruct. Special care has been given to alignment and the removal of toxic content for Meltemi 7B Instruct. The developed models are evaluated on a broad set of collected evaluation corpora, and examples of prompts and responses are presented. Both Meltemi 7B and Meltemi 7B Instruct are available at https://huggingface.co/ilsp under the Apache 2.0 license.


FarFetched: Entity-centric Reasoning and Claim Validation for the Greek Language based on Textually Represented Environments

Papadopoulos, Dimitris, Metropoulou, Katerina, Matsatsinis, Nikolaos, Papadakis, Nikolaos

arXiv.org Artificial Intelligence

Our collective attention span is shortened by the flood of online information. With \textit{FarFetched}, we address the need for automated claim validation based on evidence aggregated from multiple online news sources. We introduce an entity-centric reasoning framework in which latent connections between events, actions, or statements are revealed via entity mentions and represented in a graph database. Using entity linking and semantic similarity, we offer a way to collect and combine information from diverse sources in order to generate evidence relevant to the user's claim. Then, we leverage textual entailment recognition to quantitatively determine whether this claim is credible, based on the generated evidence. Our approach tries to fill the gap in automated claim validation for less-resourced languages and is showcased on the Greek language, complemented by the training of relevant semantic textual similarity (STS) and natural language inference (NLI) models that are evaluated on translated versions of common benchmarks.
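The evidence-selection step in a pipeline like this can be sketched in miniature. The sketch below is an assumption-laden stand-in, not FarFetched itself: the real system uses entity linking, a graph database, and trained STS/NLI models, whereas here plain Jaccard token overlap substitutes for semantic similarity, and the entailment step is omitted entirely:

```python
# Toy evidence selection: rank candidate sentences by lexical overlap
# with the claim. A trained STS model would replace jaccard(), and a
# trained NLI model would then judge entailment of the claim by the
# selected evidence.
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two token sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def select_evidence(claim: str, sentences: list[str], k: int = 2) -> list[str]:
    """Return the k sentences most lexically similar to the claim."""
    claim_tokens = set(claim.lower().split())
    return sorted(
        sentences,
        key=lambda s: jaccard(claim_tokens, set(s.lower().split())),
        reverse=True,
    )[:k]
```

The selected sentences would then be concatenated into an evidence passage and passed, together with the claim, to the NLI model for an entailment/contradiction/neutral verdict.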


GreekBART: The First Pretrained Greek Sequence-to-Sequence Model

Evdaimon, Iakovos, Abdine, Hadi, Xypolopoulos, Christos, Outsios, Stamatis, Vazirgiannis, Michalis, Stamou, Giorgos

arXiv.org Artificial Intelligence

The era of transfer learning has revolutionized the fields of Computer Vision and Natural Language Processing, bringing powerful pretrained models with exceptional performance across a variety of tasks. In particular, Natural Language Processing tasks have been dominated by transformer-based language models. In Natural Language Inference and Natural Language Generation tasks, the BERT model and its variants, as well as the GPT model and its successors, have demonstrated exemplary performance. However, the majority of these models are pretrained and assessed primarily for the English language or on a multilingual corpus. In this paper, we introduce GreekBART, the first Greek sequence-to-sequence model, based on the BART-base architecture and pretrained on a large-scale Greek corpus. We evaluate and compare GreekBART against BART-random, Greek-BERT, and XLM-R on a variety of discriminative tasks. In addition, we examine its performance on two NLG tasks from GreekSUM, a newly introduced summarization dataset for the Greek language. The model, the code, and the new summarization dataset will be publicly available.